We consider a novel formulation of the multi-armed bandit model, which we call the contextual bandit with restricted context, where only a limited number of features can be accessed by the learner at every iteration. This novel formulation is motivated by different online problems arising in clinical trials, recommender systems and attention modeling. Herein, we adapt the standard multi-armed bandit algorithm known as Thompson Sampling to take advantage of our restricted context setting, and propose two novel algorithms, called the Thompson Sampling with Restricted Context (TSRC) and the Windows Thompson Sampling with Restricted Context (WTSRC), for handling stationary and nonstationary environments, respectively. Our empirical results demonstrate advantages of the proposed approaches on several real-life datasets.
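To make the restricted-context setting concrete, the following is a minimal illustrative sketch (not the paper's TSRC algorithm): a Gaussian linear Thompson Sampling learner that, at each round, observes only `k` of the `d` context features, masking the rest to zero. The feature-subset choice, prior, and noise model here are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of Thompson Sampling under restricted context:
# at each round the learner sees only k of d context features.
rng = np.random.default_rng(0)

d, k, n_arms, T = 10, 3, 2, 500
theta_true = rng.normal(size=(n_arms, d))   # hidden per-arm reward parameters

# Gaussian posterior per arm for Bayesian linear regression:
# precision matrix B (ridge prior = identity) and vector f.
B = np.stack([np.eye(d) for _ in range(n_arms)])
f = np.zeros((n_arms, d))

total_reward = 0.0
for t in range(T):
    x_full = rng.normal(size=d)                  # full environment context
    obs = rng.choice(d, size=k, replace=False)   # restricted feature subset
    x = np.zeros(d)
    x[obs] = x_full[obs]                         # learner sees only k features

    # Sample a parameter vector from each arm's posterior, play the best arm.
    samples = [
        rng.multivariate_normal(np.linalg.solve(B[a], f[a]), np.linalg.inv(B[a]))
        for a in range(n_arms)
    ]
    a = int(np.argmax([s @ x for s in samples]))

    # Reward depends on the full context; the learner never observes x_full.
    r = theta_true[a] @ x_full + rng.normal(scale=0.1)
    total_reward += r

    # Posterior update uses only the observed (masked) context.
    B[a] += np.outer(x, x)
    f[a] += r * x

print(total_reward)
```

A nonstationary variant in the spirit of WTSRC would rebuild `B` and `f` from a sliding window of recent rounds rather than accumulating all history.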